Inexact Generalized Gauss–Newton for Scaling the Canonical Polyadic Decomposition With Non-Least-Squares Cost Functions

Authors

Abstract

The canonical polyadic decomposition (CPD) allows one to extract compact and interpretable representations of tensors. Several optimization-based methods exist to fit the CPD of a tensor for the standard least-squares (LS) cost function. Extensions have been proposed for more general cost functions such as β-divergences as well. For these non-LS cost functions, a generalized Gauss–Newton (GGN) method has been developed. This is a second-order method that uses an approximation of the Hessian of the cost function to determine the next iterate; with this algorithm, fast convergence can be achieved close to the solution. While it is possible to construct the full Hessian approximation for small tensors, the exact GGN approach becomes too expensive for tensors of larger dimensions, as found in typical applications. In this paper, we therefore propose to use an inexact GGN method and provide several strategies to make it scalable to large tensors. First, the Hessian approximation is only used implicitly and its multilinear structure is exploited during Hessian-vector products, which greatly improves the scalability of the method. Next, we show that by using a compressed instance of the Hessian approximation, the computation time can be lowered even more, with limited influence on the convergence speed. We also derive dedicated preconditioners for the problem. Further, the maximum likelihood estimator for Rician distributed data is examined in detail as an example of an alternative cost function, useful for the analysis of moduli of complex data, as in functional magnetic resonance imaging, for instance. We compare with existing methods and demonstrate the method's speed and effectiveness on synthetic, simulated, and real-life data. Finally, we scale the method using randomized block sampling.
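The core idea of the inexact approach described above — applying the Gauss–Newton Hessian approximation only implicitly, through Hessian-vector products inside an iterative linear solver — can be illustrated on a toy problem. The sketch below is not the paper's CPD algorithm; it is a minimal matrix-free Gauss–Newton loop for a hypothetical two-parameter exponential fit, where the normal-equation system (JᵀJ + λI)p = −Jᵀr is solved inexactly by conjugate gradients using only matvecs:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical toy problem: fit y ≈ exp(a*t) + b by inexact Gauss-Newton.
t = np.linspace(0.0, 1.0, 50)
a_true, b_true = 1.5, 0.3
y = np.exp(a_true * t) + b_true

def residual(x):
    a, b = x
    return np.exp(a * t) + b - y

def jacobian(x):
    a, _ = x
    return np.column_stack([t * np.exp(a * t), np.ones_like(t)])

x = np.array([1.0, 0.0])  # initial guess
for _ in range(10):
    r = residual(x)
    J = jacobian(x)
    lam = 1e-8  # small damping for numerical safety
    # Matrix-free GGN system: the Hessian approximation J^T J is never
    # formed explicitly; CG only ever sees its action on a vector.
    H = LinearOperator((2, 2), matvec=lambda v: J.T @ (J @ v) + lam * v)
    p, _ = cg(H, -J.T @ r)
    x = x + p

print(np.round(x, 4))
```

For a CPD cost function the same pattern applies, except that the Hessian-vector product exploits the multilinear structure of the factor matrices instead of an explicit Jacobian, which is what makes the method scale to large tensors.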


Related Articles

Canonical Polyadic Decomposition with Orthogonality Constraints

Canonical Polyadic Decomposition (CPD) of a higher-order tensor is an important tool in mathematical engineering. In many applications at least one of the matrix factors is constrained to be column-wise orthonormal. We first derive a relaxed condition that guarantees uniqueness of the CPD under this constraint and generalize the result to the case where one of the factor matrices has full colum...


A Generalized Least Squares Matrix Decomposition

Variables in high-dimensional data sets common in neuroimaging, spatial statistics, time series and genomics often exhibit complex dependencies that can arise, for example, from spatial and/or temporal processes or latent network structures. Conventional multivariate analysis techniques often ignore these relationships. We propose a generalization of the singular value decomposition that is app...


Canonical Polyadic Decomposition of Third-Order Tensors: Reduction to Generalized Eigenvalue Decomposition

Now, the statement (i) follows from (S.1.3) by setting y = x. (ii) Since the vectors c_{i1}, …, c_{iK−1} are linearly independent in R, it follows that there exists a vector y such that det[c_{i1} ⋯ c_{iK−1} y] ≠ 0. Hence, by (S.1.3), the (i1, …, iK−1)-th column of B(C) is nonzero. (iii) follows from (S.1.3) and the fact that det[c_{i1} ⋯ c_{iK−1} y] = 0 if and only if y ∈ span{c_{i1}, …


Canonical Polyadic Decomposition with a Columnwise Orthonormal Factor Matrix

Canonical Polyadic Decomposition (CPD) of a higher-order tensor is an important tool in mathematical engineering. In many applications at least one of the matrix factors is constrained to be column-wise orthonormal. We first derive a relaxed condition that guarantees uniqueness of the CPD under this constraint. Second, we give a simple proof of the existence of the optimal low-rank approximatio...


Supplement to “ A Generalized Least Squares Matrix Decomposition ”

In addition to sparseness, there is much interest in penalties that encourage smoothness, especially in the context of functional data analysis. We show how the GPMF can be used with smooth penalties and propose a generalized gradient descent method to solve for these smooth GPMF factors. Many have proposed to obtain smoothness in the factors by using a quadratic penalty. Rice and Silverman (19...



Journal

Journal title: IEEE Journal of Selected Topics in Signal Processing

Year: 2021

ISSN: 1941-0484, 1932-4553

DOI: https://doi.org/10.1109/jstsp.2020.3045911